Linear Convergence of Proximal Incremental Aggregated Gradient Methods under Quadratic Growth Condition
Abstract
Under the strong convexity assumption, several recent works have studied the global linear convergence rate of the proximal incremental aggregated gradient (PIAG) method for minimizing the sum of a large number of smooth component functions and a non-smooth convex function. In this paper, under the quadratic growth condition, a condition strictly weaker than strong convexity, we derive a new convergence result which implies that the PIAG method attains global linear convergence rates in both the function value error and the iterate point error. Moreover, by using relative smoothness (recently proposed to weaken the traditional gradient Lipschitz continuity) and by defining a Bregman distance growth condition (which generalizes the quadratic growth condition), we further analyze the PIAG method with general distance functions. Finally, we propose a new variant of the PIAG method with improved linear convergence rates. Our theory not only recovers many very recent results under strictly weaker assumptions but also provides new results for both PIAG methods and the proximal gradient method. In addition, if the strong convexity assumption does hold, then our theory shows that the corresponding rates derived under the quadratic growth condition can be improved. The key idea behind our theory is the construction of suitable Lyapunov functions.
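To make the setting concrete, the problem, the PIAG iteration, and the quadratic growth condition can be sketched as follows (a schematic summary in our own notation; the stepsize \(\gamma\), the bounded delays \(\tau_i^k\), and the growth constant \(\mu\) are generic symbols, not values taken from the paper):

\[
\min_{x \in \mathbb{R}^n} \; F(x) := \sum_{i=1}^{N} f_i(x) + g(x), \qquad f_i \text{ smooth}, \quad g \text{ convex, possibly non-smooth},
\]
\[
x^{k+1} = \operatorname{prox}_{\gamma g}\Big( x^k - \gamma \sum_{i=1}^{N} \nabla f_i\big(x^{\tau_i^k}\big) \Big), \qquad 0 \le k - \tau_i^k \le \tau,
\]
\[
\text{quadratic growth:} \qquad F(x) - F^\star \;\ge\; \frac{\mu}{2} \, \operatorname{dist}^2\big(x, \mathcal{X}^\star\big) \quad \text{for all } x \in \operatorname{dom} g,
\]

where \(\mathcal{X}^\star\) is the optimal solution set, \(F^\star\) the optimal value, and each component gradient in the aggregated sum may be evaluated at a delayed iterate with delay at most \(\tau\).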
Similar articles
Proximal-Like Incremental Aggregated Gradient Method with Linear Convergence under Bregman Distance Growth Conditions
We introduce a unified algorithmic framework, called the proximal-like incremental aggregated gradient (PLIAG) method, for minimizing the sum of smooth convex component functions and a proper closed convex regularization function that is possibly non-smooth and extended-valued, with an additional abstract feasible set whose geometry can be captured by using the domain of a Legendre function. The PLI...
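As a rough sketch of the Bregman machinery referred to here (our notation, not necessarily the paper's): given a Legendre function \(h\), the Bregman distance and the corresponding proximal-like step with an aggregated delayed gradient take the form

\[
D_h(x, y) := h(x) - h(y) - \langle \nabla h(y), \, x - y \rangle,
\]
\[
x^{k+1} \in \arg\min_{x} \Big\{ g(x) + \Big\langle \sum_{i=1}^{N} \nabla f_i\big(x^{\tau_i^k}\big), \, x \Big\rangle + \frac{1}{\gamma} D_h\big(x, x^k\big) \Big\},
\]

which reduces to the usual Euclidean proximal step when \(h = \tfrac{1}{2}\|\cdot\|^2\); a Bregman distance growth condition then plays the role that the quadratic growth condition plays in the Euclidean case.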
Incremental Aggregated Proximal and Augmented Lagrangian Algorithms
We consider minimization of the sum of a large number of convex functions, and we propose an incremental aggregated version of the proximal algorithm, which bears similarity to the incremental aggregated gradient and subgradient methods that have received a lot of recent attention. Under cost function differentiability and strong convexity assumptions, we show linear convergence for a sufficien...
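For orientation, one common way to write such an incremental aggregated proximal update (our reconstruction of the general idea, not a quotation from the paper) is: at iteration k, one component \(f_{i_k}\) is handled with a full proximal step while the remaining components enter through possibly delayed linearizations,

\[
x^{k+1} \in \arg\min_{x} \Big\{ f_{i_k}(x) + \sum_{j \ne i_k} \Big[ f_j\big(x^{\tau_j^k}\big) + \big\langle \nabla f_j\big(x^{\tau_j^k}\big), \, x - x^{\tau_j^k} \big\rangle \Big] + \frac{1}{2\alpha} \big\| x - x^k \big\|^2 \Big\}.
\]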
Adaptive Accelerated Gradient Converging Methods under Hölderian Error Bound Condition
Recent studies have shown that the proximal gradient (PG) method and the accelerated gradient (APG) method with restarting can enjoy linear convergence under a condition weaker than strong convexity, namely a quadratic growth condition (QGC). However, the faster convergence of the restarted APG method relies on the potentially unknown constant in the QGC to appropriately restart APG, which restricts its app...
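As an illustration of why the growth constant matters for restarting, the following sketch runs an accelerated proximal gradient (FISTA-type) method on a lasso problem and restarts the momentum every K iterations, with K chosen from an assumed growth constant mu; the restart schedule, function names, and problem instance are ours for illustration and are not the adaptive scheme proposed in the paper.

import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def restarted_apg(A, b, lam, mu, n_iters=500):
    # FISTA on 0.5*||Ax - b||^2 + lam*||x||_1, with the momentum restarted
    # every K iterations; K is set from an assumed quadratic-growth constant mu.
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the smooth part
    K = max(1, int(np.ceil(np.sqrt(L / mu))))     # restart period on the order of sqrt(L/mu)
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for k in range(n_iters):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
        if (k + 1) % K == 0:                      # restart: reset momentum at the current point
            y, t = x.copy(), 1.0
    return x

A call such as restarted_apg(A, b, lam=0.1, mu=0.05) would return an approximate minimizer; if mu overestimates the true growth constant, the restart period becomes too short and the accelerated rate can be lost, which is the difficulty that adaptive restarting schemes aim to remove.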
Analysis and Implementation of an Asynchronous Optimization Algorithm for the Parameter Server
This paper presents an asynchronous incremental aggregated gradient algorithm and its implementation in a parameter server framework for solving regularized optimization problems. The algorithm can handle both general convex (possibly non-smooth) regularizers and general convex constraints. When the empirical data loss is strongly convex, we establish a linear convergence rate, give explicit expr...
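A minimal sequential simulation of this kind of scheme is sketched below (illustrative only; the function names, the random-delay model, and the l1 regularizer are our assumptions, and the actual parameter-server protocol and conditions analyzed in the paper may differ).

import numpy as np

def async_iag_simulation(A, b, lam, n_workers=4, n_iters=2000, max_delay=5, seed=0):
    # Each "worker" owns one block of rows of (A, b); the "server" keeps the iterate x
    # and a table with the most recent gradient reported by each worker. Gradients are
    # computed at stale copies of x (random bounded delay), then the server takes a
    # proximal step with the aggregated (partly stale) gradient.
    rng = np.random.default_rng(seed)
    blocks = np.array_split(np.arange(A.shape[0]), n_workers)
    L = sum(np.linalg.norm(A[idx], 2) ** 2 for idx in blocks)   # crude Lipschitz bound
    step = 1.0 / L                                              # conservative stepsize
    x = np.zeros(A.shape[1])
    history = [x.copy()]                        # past iterates, to draw stale copies from
    grads = np.zeros((n_workers, A.shape[1]))   # gradient table held by the server
    for k in range(n_iters):
        i = k % n_workers                       # worker i reports an updated gradient
        delay = int(rng.integers(0, min(max_delay, len(history))))
        x_stale = history[-1 - delay]
        Ai, bi = A[blocks[i]], b[blocks[i]]
        grads[i] = Ai.T @ (Ai @ x_stale - bi)
        v = x - step * grads.sum(axis=0)
        x = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)   # prox of step*lam*||.||_1
        history.append(x.copy())
    return x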